    Protosymbols that integrate recognition and response

    We explore two controversial hypotheses through robotic implementation: (1) processes involved in recognition and response are tightly coupled both in their operation and epigenesis; and (2) processes involved in symbol emergence should respect the integrity of recognition and response while exploiting the periodicity of biological motion. To that end, this paper proposes a method of recognizing and generating motion patterns based on nonlinear principal component neural networks that are constrained to model both periodic and transitional movements. The method is evaluated by examining its ability to segment and generalize different kinds of soccer-playing activity during a RoboCup match.
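
    A minimal sketch of the core idea, assuming an autoencoder-style nonlinear PCA whose bottleneck is constrained to the unit circle so that the latent variable behaves like a gait phase; the architecture, layer sizes, and toy training data below are illustrative and are not the paper's exact network. A transitional (non-cyclic) movement would use the same autoencoder without the circular projection.

    import math
    import torch
    import torch.nn as nn

    class PeriodicNLPCA(nn.Module):
        """Autoencoder with a 2-D bottleneck projected onto the unit circle,
        so the latent state is a phase angle suited to cyclic motion."""
        def __init__(self, n_joints: int, hidden: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_joints, hidden), nn.Tanh(), nn.Linear(hidden, 2))
            self.decoder = nn.Sequential(
                nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, n_joints))

        def forward(self, x):
            z = self.encoder(x)
            # Circular constraint: normalize the code so it encodes (cos θ, sin θ).
            z = z / (z.norm(dim=-1, keepdim=True) + 1e-8)
            return self.decoder(z), z

    # Toy periodic "joint angle" trajectory standing in for motion-capture frames.
    t = torch.linspace(0, 4 * math.pi, 400).unsqueeze(1)
    frames = torch.cat([torch.sin(t), torch.sin(2 * t), torch.cos(t)], dim=1)

    model = PeriodicNLPCA(n_joints=3)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(500):
        recon, _ = model(frames)
        loss = nn.functional.mse_loss(recon, frames)
        opt.zero_grad()
        loss.backward()
        opt.step()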

    Periodic nonlinear principal component neural networks for humanoid motion segmentation, generalization, and generation

    In an experiment with a soccer-playing robot, periodic temporally-constrained nonlinear principal component neural networks (NLPCNNs) are shown to characterize humanoid motion effectively by exploiting fundamental sensorimotor relationships. Each network learns a periodic or transitional trajectory in a phase space of possible actions, and thus abstracts a kind of protosymbol. NLPCNNs can play a key role in a system that learns to imitate people, enabling a robot to recognize the behavior of others because it has grounded that behavior in terms of its own bodily movements.
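
    One way to read the recognition claim, sketched under the assumption that each protosymbol is one trained network of the kind sketched above: an observed motion window is attributed to whichever network reconstructs it with the lowest error. The protosymbol names and the window handling here are hypothetical.

    import torch

    def recognize(window: torch.Tensor, protosymbols: dict) -> str:
        """Return the name of the network that best reconstructs the window."""
        errors = {}
        with torch.no_grad():
            for name, net in protosymbols.items():
                recon, _ = net(window)
                errors[name] = torch.mean((recon - window) ** 2).item()
        return min(errors, key=errors.get)

    # e.g. protosymbols = {"walk": walk_net, "kick": kick_net, "turn": turn_net}
    # label = recognize(observed_frames, protosymbols)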

    Dynamic imitation in a humanoid robot through nonparametric probabilistic inference

    We tackle the problem of learning imitative whole-body motions in a humanoid robot using probabilistic inference in Bayesian networks. Our inference-based approach affords a straightforward method to exploit rich yet uncertain prior information obtained from human motion capture data. Dynamic imitation implies that the robot must interact with its environment and account for forces such as gravity and inertia during imitation. Rather than explicitly modeling these forces and the body of the humanoid as in traditional approaches, we show that stable imitative motion can be achieved by learning a sensor-based representation of dynamic balance. Bayesian networks provide a sound theoretical framework for combining prior kinematic information (from observing a human demonstrator) with prior dynamic information (based on previous experience) to model, and subsequently infer, motions which with high probability will be dynamically stable. By posing the problem as one of inference in a Bayesian network, we show that methods developed for approximate inference can be leveraged to infer actions efficiently. Additionally, by using nonparametric inference and a nonparametric (Gaussian process) forward model, our approach does not make any strong assumptions about the physical environment or the mass and inertial properties of the humanoid robot. We propose an iterative, probabilistically constrained algorithm for exploring the space of motor commands and show that the algorithm can quickly discover dynamically stable actions for whole-body imitation of human motion. Experimental results based on simulation and subsequent execution by a HOAP-2 humanoid robot demonstrate that our algorithm is able to imitate a human performing actions such as squatting and a one-legged balance.
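
    A rough sketch of the forward-model idea under stated assumptions: scikit-learn's GaussianProcessRegressor stands in for the paper's nonparametric forward model; the six-dimensional command vector and the scalar "stability score" (e.g. a summary of balance sensors) are hypothetical; and toy data replaces the robot's prior experience. The paper's full iterative, probabilistically constrained search is not reproduced, only a single propose-score-select step.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Prior experience: motor commands the robot has executed, paired with the
    # balance outcome observed for each (toy data standing in for real logs).
    past_commands = rng.uniform(-1.0, 1.0, size=(200, 6))
    past_scores = 1.0 - np.abs(past_commands[:, :2]).sum(axis=1)

    forward_model = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.05))
    forward_model.fit(past_commands, past_scores)

    # Kinematic prior from the human demonstration (hypothetical target posture),
    # perturbed to propose candidate commands; the GP predicts each candidate's
    # stability, and the conservatively best one (mean minus std) is selected.
    demo_command = np.zeros(6)
    candidates = demo_command + 0.1 * rng.standard_normal((100, 6))
    mean, std = forward_model.predict(candidates, return_std=True)
    best_command = candidates[np.argmax(mean - std)]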